GPERF: a perfect hash function generator
gperf is a widely available perfect hash function generator written in C++. It automates a common system software operation: keyword recognition. gperf translates an n-element user-specified keyword list (the keyfile) into source code containing a k-element lookup table and a pair of functions, phash and in_word_set. phash uniquely maps keywords in the keyfile onto the range 0..k-1, where k >= n. If k = n, then phash is considered a minimal perfect hash function. in_word_set uses phash to determine whether a particular string of characters str occurs in the keyfile, using at most one string comparison. This paper describes the user interface, options, features, algorithm design, and implementation strategies incorporated in gperf. It also presents the results from an empirical comparison between gperf-generated recognizers and other popular techniques for reserved word lookup.
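The phash/in_word_set contract can be sketched in a few lines. This is a minimal illustration in Python (gperf itself generates C/C++ source); the keyword set and the (length, first character) signature used by phash here are illustrative assumptions, not actual gperf output.

```python
KEYWORDS = ["if", "else", "while", "return"]

# phash: uniquely maps each keyword to a slot in 0..k-1. Here k == n,
# so this plays the role of a minimal perfect hash function.
_SLOT = {(len(w), w[0]): i for i, w in enumerate(KEYWORDS)}

def phash(s: str) -> int:
    # Unknown signatures fall back to slot 0; the single string
    # comparison in in_word_set rejects them.
    if not s:
        return 0
    return _SLOT.get((len(s), s[0]), 0)

# k-element lookup table, indexed by phash value.
_TABLE = [None] * len(KEYWORDS)
for _w in KEYWORDS:
    _TABLE[phash(_w)] = _w

def in_word_set(s: str) -> bool:
    # At most one string comparison, as in gperf-generated recognizers.
    return bool(s) and _TABLE[phash(s)] == s
```

Because phash is collision-free on the keyword set, membership testing never needs to probe more than one table slot.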
Computing infrastructure issues in distributed communications systems: a survey of operating system transport system architectures
The performance of distributed applications (such as file transfer, remote login, teleconferencing, full-motion video, and scientific visualization) is influenced by several factors that interact in complex ways. In particular, application performance is significantly affected by both communication infrastructure factors and computing infrastructure factors. Communication infrastructure factors include channel speed, bit-error rate, and congestion at intermediate switching nodes. Computing infrastructure factors include (among other things) both protocol processing activities (such as connection management, flow control, error detection, and retransmission) and general operating system factors (such as memory latency, CPU speed, interrupt and context switching overhead, process architecture, and message buffering). Due to a several-orders-of-magnitude increase in network channel speed and an increase in application diversity, performance bottlenecks are shifting from the network factors to the transport system factors. This paper defines an abstraction called an "Operating System Transport System Architecture" (OSTSA) that is used to classify the major components and services in the computing infrastructure. End-to-end network protocols such as TCP, TP4, VMTP, XTP, and Delta-t typically run on general-purpose computers, where they utilize various operating system resources such as processors, virtual memory, and network controllers. The OSTSA provides services that integrate these resources to support distributed applications running on local and wide area networks. A taxonomy is presented to evaluate OSTSAs in terms of their support for protocol processing activities. We use this taxonomy to compare and contrast five general-purpose commercial and experimental operating systems: System V UNIX, BSD UNIX, the x-kernel, Choices, and Xinu.
The Data Distribution Service – The Communication Middleware Fabric for Scalable and Extensible Systems-of-Systems
During the past several decades, techniques and technologies have emerged to design and implement distributed systems effectively. A remaining challenge, however, is devising techniques and technologies that will help design and implement SoSs. SoSs present unique challenges when compared to traditional systems because of their scale, heterogeneity, extensibility, and evolvability requirements.
Deconstructing (2,0) proposals
C. P. is supported by the U.S. Department of Energy under Grant No. DE-FG02-96ER40959. M. S. S. is supported by an EURYI award of the European Science Foundation.
08331 Abstracts Collection -- Perspectives Workshop: Model Engineering of Complex Systems (MECS)
From 10.08. to 13.08.2008, the Dagstuhl Seminar 08331 "Perspectives Workshop: Model Engineering of Complex Systems (MECS)" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.
Automated Reasoning for Multi-step Feature Model Configuration Problems
The increasing complexity and cost of software-intensive systems has led developers to seek ways of increasing software reusability. One software reuse approach is to develop a Software Product-line (SPL), which is a reconfigurable software architecture that can be reused across projects. Creating configurations of the SPL that meet arbitrary requirements is hard. Existing research has focused on techniques that produce a configuration of the SPL in a single step. This paper provides three contributions to the study of multi-step configuration for SPLs. First, we present a formal model of multi-step SPL configuration and map this model to constraint satisfaction problems (CSPs). Second, we show how solutions to these configuration CSPs can be derived automatically with a constraint solver. Third, we present empirical results demonstrating that our CSP-based technique can solve multi-step configuration problems involving hundreds of features in seconds.
Comisión Interministerial de Ciencia y Tecnología TIN2006-00472. Junta de Andalucía TIC-253.
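The multi-step formulation can be sketched as a small search problem in plain Python. The three-feature model, the cross-tree constraint, and the one-change-per-step budget below are illustrative assumptions; they stand in for the paper's formal CSP encoding and an off-the-shelf constraint solver.

```python
from itertools import product

# Illustrative three-feature model; real SPLs have hundreds of features.
FEATURES = ["A", "B", "C"]
START = frozenset()            # initial configuration (nothing selected)
END = frozenset({"A", "C"})    # required final configuration
STEPS = 3                      # total configurations in the path
MAX_CHANGES = 1                # features allowed to change between steps

def feature_model_ok(cfg):
    # Example cross-tree constraint: selecting C requires A.
    return "C" not in cfg or "A" in cfg

def all_valid_configs():
    for bits in product([False, True], repeat=len(FEATURES)):
        cfg = frozenset(f for f, b in zip(FEATURES, bits) if b)
        if feature_model_ok(cfg):
            yield cfg

def solve():
    # Brute-force stand-in for a constraint solver: find a path
    # START -> ... -> END where every configuration satisfies the
    # feature model and consecutive steps differ by <= MAX_CHANGES.
    valid = list(all_valid_configs())
    for middle in product(valid, repeat=STEPS - 1):
        path = (START,) + middle
        if path[-1] != END:
            continue
        if all(len(a ^ b) <= MAX_CHANGES for a, b in zip(path, path[1:])):
            return path
    return None
```

A real CSP encoding would express the per-step feature variables and change budget as solver constraints rather than enumerating paths, which is what makes hundreds of features tractable.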
NL2CMD: An Updated Workflow for Natural Language to Bash Commands Translation
Translating natural language into Bash Commands is an emerging research field that has gained attention in recent years. Most efforts have focused on producing more accurate translation models. To the best of our knowledge, only two datasets are available, with one based on the other. Both datasets involve scraping through known data sources (through platforms like Stack Overflow, crowdsourcing, etc.) and hiring experts to validate and correct either the English text or Bash Commands.
This paper provides two contributions to research on synthesizing Bash Commands from scratch. First, we describe a state-of-the-art translation model used to generate Bash Commands from the corresponding English text. Second, we introduce a new NL2CMD dataset that is automatically generated, involves minimal human intervention, and is over six times larger than prior datasets. Since the generation pipeline does not rely on existing Bash Commands, the distribution and types of commands can be custom adjusted. Our empirical results show how the scale and diversity of our dataset can offer unique opportunities for semantic parsing researchers.